
    Tuning struggle strategy in genetic algorithms for scheduling in computational grids

    Job scheduling on computational grids is gaining importance due to the need for efficient large-scale Grid-enabled applications. Among the optimization techniques proposed for this problem, Genetic Algorithms (GAs) are a popular class of solution methods. Because GAs are high-level algorithms, specific variants can be designed by choosing the genetic operators as well as the evolutionary strategy. In this paper we focus on Struggle GAs and their tuning for the scheduling of independent jobs in computational grids. Our results show that a careful hash-based implementation for computing the similarity of solutions alleviates the computational burden of the Struggle GA and performs better than standard similarity measures.
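
    The abstract does not give the authors' hash function, but the struggle-replacement idea it describes can be illustrated with a small sketch: a coarse hash over each chromosome (here, the per-machine job counts, an assumption for illustration) restricts the similarity search to a bucket of look-alike schedules instead of the whole population.

```python
import random
from collections import defaultdict

# Minimal sketch (not the authors' exact scheme) of a struggle-replacement step:
# a coarse hash over the chromosome limits the similarity search to one bucket.
# A chromosome maps each job index to a machine index.

def makespan(chrom, job_len, n_machines):
    load = [0.0] * n_machines
    for job, m in enumerate(chrom):
        load[m] += job_len[job]
    return max(load)                       # lower is better

def similarity(a, b):
    return sum(x == y for x, y in zip(a, b))   # number of shared assignments

def bucket_key(chrom, n_machines):
    counts = [0] * n_machines              # coarse signature: jobs per machine
    for m in chrom:
        counts[m] += 1
    return tuple(counts)

def struggle_step(population, child, job_len, n_machines):
    # In a real implementation the buckets would be maintained incrementally.
    buckets = defaultdict(list)
    for i, ind in enumerate(population):
        buckets[bucket_key(ind, n_machines)].append(i)
    candidates = buckets.get(bucket_key(child, n_machines)) or range(len(population))
    rival = max(candidates, key=lambda i: similarity(population[i], child))
    if makespan(child, job_len, n_machines) < makespan(population[rival], job_len, n_machines):
        population[rival] = child          # struggle: replace the most similar, worse rival

# Toy usage: 8 jobs on 3 machines.
job_len = [4, 2, 7, 1, 3, 5, 2, 6]
pop = [[random.randrange(3) for _ in job_len] for _ in range(20)]
struggle_step(pop, [random.randrange(3) for _ in job_len], job_len, 3)
```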

    Innovations in nature inspired optimization and learning methods

    The nine papers included in this special issue represent a selection of extended contributions presented at the Third World Congress on Nature and Biologically Inspired Computing (NaBIC2011), held in Salamanca, Spain, October 19–21, 2011. Papers were selected on the basis of fundamental ideas and concepts rather than the direct usage of well-established techniques. This special issue is therefore aimed at practitioners, researchers and postgraduate students who are engaged in developing and applying advanced Nature and Biologically Inspired Computing models to solving real-world problems. The papers are organized as follows. The first paper, by Apeh et al., presents a comparative investigation of four approaches for classifying dynamic customer profiles built using evolving transactional data over time. The changing class values of the customer profiles were analyzed together with the challenging problem of deciding whether to change the class label or adapt the classifier. The results of experiments conducted on highly sparse and skewed real-world transactional data show that adapting the classifiers leads to more stable classification of customer profiles in the shorter time windows, while relabelling the changed customer profile classes leads to more accurate and stable classification in the longer time windows. In the second paper, Frolov et al. suggest a new approach to Boolean factor analysis that extends their previously proposed method, a Hopfield-like attractor neural network with increasing activity. The authors increase its applicability and robustness by complementing the method with maximization of the learning-set likelihood function defined according to the Noisy-OR generative model. They demonstrate the efficiency of the new method on a data set generated according to the model. Successful application to real data is shown by analyzing the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequences for 1368 organisms. In the sequel, Triguero et al. analyze the integration of a wide variety of noise filters into the self-training process to distinguish the most relevant features of the filters. They focus on the nearest-neighbour rule as the base classifier and ten different noise filters, and provide an extensive analysis of the performance of these filters considering different ratios of labelled data. The results are contrasted with nonparametric statistical tests that allow the identification of relevant filters, and their main characteristics, in the field of semi-supervised learning. In the fourth paper, Gutiérrez-Avilés et al. present the TriGen algorithm, a genetic algorithm that finds triclusters of gene expression data that take into account the experimental conditions and the time points simultaneously. The authors have used TriGen to mine synthetic data, the yeast (Saccharomyces cerevisiae) cell cycle, and human inflammation and host response to injury experiments. TriGen has proved capable of extracting groups of genes with similar patterns in subsets of conditions and times, and these groups have been shown to be related in terms of their functional annotations extracted from the Gene Ontology project. In the following paper, Varela et al. introduce and study the application of Constrained Sampling Evolutionary Algorithms in the framework of a UAV-based search and rescue scenario.
These algorithms have been developed as a way to harness the power of Evolutionary Algorithms (EAs) when operating in complex, noisy, multimodal optimization problems and to transfer the advantages of this approach to real-time, real-world problems that can be transformed into search and optimization challenges. Such problems are denoted Constrained Sampling problems and are characterized by the fact that the physical limitations of reality do not allow for an instantaneous determination of the fitness of the points in the population that must be evolved. A general approach to address these problems is presented, and a particular implementation using Differential Evolution as an example of a CS-EA is created and evaluated using teams of UAVs in search and rescue missions. The results are compared to those of a Swarm Intelligence based strategy on the same type of problem, as this approach has been widely used within the UAV path-planning field in different variants by many authors. In the sixth paper, Zhao et al. introduce human intelligence into computational intelligence algorithms, namely particle swarm optimization (PSO) and immune algorithms (IA). A novel human-computer cooperative PSO-based immune algorithm (HCPSO-IA) is proposed, in which the initial population consists of initial artificial individuals supplied by humans, while the initial algorithm individuals are generated by a chaotic strategy. Some new artificial individuals are introduced to replace the inferior individuals of the population. HCPSO-IA benefits from giving free rein to the talents of designers and computers, and contributes to solving complex layout design problems. The experimental results illustrate that the proposed algorithm is feasible and effective. In the sequel, Rebollo-Ruiz and Graña give an extensive empirical evaluation of the innovative nature-inspired Gravitational Swarm Intelligence (GSI) algorithm solving the Graph Coloring Problem (GCP). GSI follows the Swarm Intelligence problem-solving approach, where the spatial positions of agents are interpreted as problem solutions and agent motion is determined solely by local information, avoiding any central control system. To apply GSI to search for solutions of the GCP, the authors map agents to the graph's nodes. Agents move as particles in the gravitational field defined by goal objects corresponding to colors; when an agent falls into the gravitational well of a color goal, its corresponding node is colored with that color. The graph's connectivity is mapped into a repulsive force between agents corresponding to adjacent nodes (a toy sketch of this mapping is given after this overview). The authors discuss the convergence of the algorithm by testing it over an extensive suite of well-known benchmarking graphs; comparison to state-of-the-art approaches in the literature shows improvements on many of the benchmark graphs. In the eighth paper, Macaš et al. demonstrate how novel algorithms can be derived from opinion formation models and empirically demonstrate their usability in the area of binary optimization. In particular, they introduce a general SITO algorithmic framework and describe four algorithms based on it. Recent applications of these algorithms to pattern recognition in electronic nose, electronic tongue, newborn EEG and ICU patient mortality prediction are discussed. Finally, an open-source SITO library for MATLAB and Java is introduced. In the final paper, Madureira et al.
present a negotiation mechanism for dynamic scheduling based on social and collective intelligence. Under the proposed negotiation mechanism, agents must interact and collaborate in order to improve the global schedule. Swarm Intelligence is considered a general aggregation term for several computational techniques that take ideas and inspiration from the social behaviors of insects and other biological systems. This work is concerned with negotiation, where multiple self-interested agents can reach agreement over the exchange of operations on competitive resources. A computational study was performed to validate the influence of the negotiation mechanism and of the SI technique on system performance. From the results obtained it was possible to conclude that there is statistical evidence that the negotiation mechanism significantly influences overall system performance, and that the Artificial Bee Colony has an advantage in the effectiveness of makespan minimization and machine occupation maximization. We would like to thank our peer-reviewers for their diligent work and efficient efforts. We are also grateful to the Editor-in-Chief of Neurocomputing, Prof. Tom Heskes, for his continued support for the NaBIC conference and for the opportunity to organize this special issue.
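
    The GSI mapping described in the overview above can be illustrated with a toy sketch (assumptions only, not the authors' implementation): one agent per node moves in the plane, attracted to fixed color goals and repelled by agents of adjacent nodes that currently sit in the same well; each node finally takes the color of the goal closest to its agent.

```python
import math, random

# Toy sketch of the GSI mapping for graph coloring: color goals sit on a circle,
# each node's agent is attracted to its nearest goal and repelled by agents of
# adjacent nodes sharing that goal; nodes are colored by their agent's nearest goal.

def gsi_coloring(edges, n_nodes, n_colors, steps=2000, step_size=0.05):
    goals = [(math.cos(2 * math.pi * c / n_colors), math.sin(2 * math.pi * c / n_colors))
             for c in range(n_colors)]
    pos = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(n_nodes)]
    adj = [[] for _ in range(n_nodes)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def nearest_goal(p):
        return min(range(n_colors),
                   key=lambda c: (p[0] - goals[c][0]) ** 2 + (p[1] - goals[c][1]) ** 2)

    for _ in range(steps):
        for i in range(n_nodes):
            g = goals[nearest_goal(pos[i])]
            fx, fy = g[0] - pos[i][0], g[1] - pos[i][1]           # attraction to the color goal
            for j in adj[i]:
                if nearest_goal(pos[j]) == nearest_goal(pos[i]):  # conflicting neighbour: repel
                    fx += pos[i][0] - pos[j][0] + random.uniform(-0.2, 0.2)
                    fy += pos[i][1] - pos[j][1] + random.uniform(-0.2, 0.2)
            pos[i][0] += step_size * fx
            pos[i][1] += step_size * fy
    return [nearest_goal(p) for p in pos]

# Usage: colour a 5-cycle with 3 colours (may leave conflicts on harder graphs).
print(gsi_coloring([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], 5, 3))
```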

    Scheduling in multiprocessor system using genetic algorithms

    Multiprocessors have emerged as a powerful computing means for running real-time applications, especially where a uniprocessor system would not be sufficient to execute all the tasks. The high performance and reliability of multiprocessors have made them a powerful computing resource. Such a computing environment requires an efficient algorithm to determine when and on which processor a given task should execute. This paper investigates dynamic scheduling of real-time tasks in a multiprocessor system to obtain a feasible solution using genetic algorithms combined with well-known heuristics such as 'Earliest Deadline First' and 'Shortest Computation Time First'. A comparative study of the results obtained from simulations shows that genetic algorithms can be used to schedule tasks so that deadlines are met while achieving high processor utilization.
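
    The combination the abstract describes can be sketched roughly as follows (a hedged illustration, not the paper's algorithm): each chromosome assigns tasks to processors, fitness counts missed deadlines when each processor runs its tasks in EDF order, and the initial population is seeded with 'Earliest Deadline First' and 'Shortest Computation Time First' greedy assignments. All task fields and parameters below are assumptions.

```python
import random

# task = (computation_time, deadline); a chromosome maps task index -> processor.

def missed_deadlines(assignment, tasks, n_procs):
    missed = 0
    for p in range(n_procs):
        mine = sorted((tasks[i] for i, a in enumerate(assignment) if a == p),
                      key=lambda t: t[1])             # run each processor's tasks in EDF order
        clock = 0
        for comp, deadline in mine:
            clock += comp
            missed += clock > deadline
    return missed

def seed(tasks, n_procs, key):
    # Greedy seed: visit tasks in the heuristic order, place each on the least-loaded processor.
    assignment, load = [0] * len(tasks), [0] * n_procs
    for i in sorted(range(len(tasks)), key=key):
        p = load.index(min(load))
        assignment[i] = p
        load[p] += tasks[i][0]
    return assignment

def ga_schedule(tasks, n_procs, pop_size=30, gens=200):
    pop = [seed(tasks, n_procs, key=lambda i: tasks[i][1]),   # Earliest Deadline First seed
           seed(tasks, n_procs, key=lambda i: tasks[i][0])]   # Shortest Computation Time First seed
    pop += [[random.randrange(n_procs) for _ in tasks] for _ in range(pop_size - 2)]
    for _ in range(gens):
        pop.sort(key=lambda a: missed_deadlines(a, tasks, n_procs))
        parents, children = pop[:pop_size // 2], []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(tasks))
            child = a[:cut] + b[cut:]                         # one-point crossover
            if random.random() < 0.1:                         # mutation: move one task
                child[random.randrange(len(tasks))] = random.randrange(n_procs)
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda a: missed_deadlines(a, tasks, n_procs))
    return best, missed_deadlines(best, tasks, n_procs)

tasks = [(random.randint(1, 5), random.randint(5, 20)) for _ in range(12)]
print(ga_schedule(tasks, n_procs=3))
```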

    Secure protocol for ad hoc transportation system

    We define an ad hoc transportation system as one that has no infrastructure such as roads (and lanes), traffic lights, etc. We assume that in such a system the vehicles are autonomous and can guide and direct themselves without a human driver. In this paper we investigate how a safe distance can be maintained between vehicles. A vehicle that has been compromised by an adversary can cause serious chaos and accidents in such a network (a denial-of-service type of attack). A simple key management scheme is then introduced to ensure secure communications between the components of the system. Keywords: collision avoidance, cyber-physical systems, secure communications
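
    The paper's actual scheme is not given in the abstract; the sketch below only illustrates the general idea of simple key management for inter-vehicle beacons: a trusted authority derives a pairwise symmetric key for each vehicle pair, and the position/speed messages used for distance keeping are authenticated with an HMAC so that forged beacons are rejected. All names and parameters are assumptions.

```python
import hmac, hashlib, json, time

# Illustrative only: in practice the master key stays with the trusted authority,
# which hands each vehicle only the pairwise keys it needs.

MASTER_KEY = b"authority-master-key"

def pairwise_key(vid_a: str, vid_b: str) -> bytes:
    """Derive a symmetric key for the (unordered) vehicle pair."""
    pair = "|".join(sorted((vid_a, vid_b))).encode()
    return hmac.new(MASTER_KEY, pair, hashlib.sha256).digest()

def make_beacon(sender, receiver, position, speed):
    body = {"from": sender, "to": receiver, "pos": position,
            "speed": speed, "ts": time.time()}
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(pairwise_key(sender, receiver), payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_beacon(receiver, payload, tag):
    body = json.loads(payload)
    expected = hmac.new(pairwise_key(body["from"], receiver),
                        payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

# Usage: vehicle A reports its position so vehicle B can keep a safe distance.
payload, tag = make_beacon("vehicle-A", "vehicle-B", position=(12.0, 3.5), speed=14.2)
assert verify_beacon("vehicle-B", payload, tag)
```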

    GerAmi: Improving Healthcare Delivery in Geriatric Residences

    Many countries face an ever-growing need to supply constant care and support for their disabled and elderly populations. In this paper, we present GerAmi (geriatric ambient intelligence), an intelligent environment that integrates multiagent systems, mobile devices, RFID, and Wi-Fi technologies to facilitate the management and control of geriatric residences. At GerAmi's core is the geriatric agent (GerAg), a deliberative agent that incorporates a case-based planning (CBP) mechanism to optimize work schedules and provide up-to-date patient and facility data. We have successfully implemented a system prototype at a care facility for Alzheimer's patients.
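
    The abstract only names the CBP mechanism; the fragment below is a hypothetical sketch of one retrieve-and-adapt cycle such a planner might perform (the case structure, similarity measure and task names are invented for illustration).

```python
# Case-based planning sketch: retrieve the stored case whose task profile is most
# similar to the current situation, then reuse its plan, adapted to today's tasks.

def similarity(profile_a, profile_b):
    """Jaccard overlap between two sets of pending task identifiers."""
    a, b = set(profile_a), set(profile_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(case_base, current_tasks):
    return max(case_base, key=lambda case: similarity(case["tasks"], current_tasks))

def adapt(plan, current_tasks):
    """Drop steps for tasks that no longer exist; append unplanned tasks at the end."""
    kept = [step for step in plan if step in current_tasks]
    return kept + [t for t in current_tasks if t not in kept]

case_base = [
    {"tasks": ["meds-roomA", "bath-roomB", "meal-hall"],
     "plan":  ["meds-roomA", "meal-hall", "bath-roomB"]},
    {"tasks": ["meds-roomA", "walk-garden"],
     "plan":  ["walk-garden", "meds-roomA"]},
]
current = ["meds-roomA", "meal-hall", "checkup-roomC"]
best = retrieve(case_base, current)
print(adapt(best["plan"], current))   # reused plan, adapted to the current task list
```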

    New trends on soft computing models in industrial and environmental applications

    The twelve papers included in this special issue represent a selection of extended contributions presented at the Sixth International Conference on Soft Computing Models in Industrial and Environmental Applications, held in Salamanca, Spain, 6–8 April 2011. Papers were selected on the basis of fundamental ideas and concepts rather than the direct usage of well-established techniques. This special issue is therefore aimed at practitioners, researchers and post-graduate students who are engaged in developing and applying advanced Soft Computing models to solving real-world problems in the industrial and environmental fields. The papers are organized as follows. In the first contribution, Graña and Gonzalez-Acuña develop a formulation of dendritic classifiers based on lattice kernels and train them using a direct Monte Carlo approach and Sparse Bayesian Learning. The results of both kinds of training are compared with Relevance Vector Machines on a collection of benchmark datasets. In the second contribution, Irigoyen and Miñano present the results of identifying the relationship over time between the required exercise (machine resistance) and the heart rate of the patient in medical effort tests, using a NARX neural network model. In the experimental stage, test data were obtained by exercising with a cyclo-ergometer in two different tests: Power Step Response and Conconi. In the third contribution, Carneiro et al. present a biologically inspired method for dispute resolution in which genetic algorithms are used to create possible solutions for a given dispute. The approach presented is able to generate a broad number of diverse solutions that cover virtually the whole search space for a given problem. The results of this work are being applied in a negotiation tool that is part of the UMCourt conflict resolution platform. In the fourth contribution, Donate et al. propose a novel Evolutionary Artificial Neural Networks (EANN) approach, where a weighted n-fold validation fitness scheme is used to build an ensemble of neural networks under four different combination methods: mean, median, softmax and rank-based combinations. Several experiments were held, using six real-world time series with different characteristics and from distinct domains. Overall, the proposed approach achieved competitive results when compared with non-weighted n-fold EANN ensembles, the simpler 0-fold EANN and the popular Holt–Winters statistical method. In the fifth contribution, Dan Burdescu et al. present a system used in the medical domain for three distinct tasks: image annotation, semantic-based image retrieval and content-based image retrieval. An original image segmentation algorithm based on a hexagonal structure was used to perform the segmentation of medical images. Image regions are described using a vocabulary of blobs generated from image features using the K-means clustering algorithm. The annotation and semantic-based retrieval tasks are evaluated for two annotation models: the Cross Media Relevance Model and the Continuous-space Relevance Model. Semantic-based image retrieval is performed using the methods provided by the annotation models. The ontology used by the annotation process was created in an original manner, starting from the information content provided by the Medical Subject Headings (MeSH). The experiments were made using a database of colour images from the medical domain, acquired with an endoscope and related to digestive diseases.
In the sixth paper, Pedraza et al. develop a face recognition system based on soft computing techniques, which complies with privacy-by-design rules and defines a set of principles that context-aware applications (including biometric sensors) should contain in order to conform to European and US law. This research deals with the necessity of considering legal issues concerning privacy or human rights in the development of biometric identification in ambient intelligence systems. Clearly, context-based services and ambient intelligence (and the most promising research area in Europe, namely ambient assisted living, AAL) call for a major research effort on new identification procedures. The aim of the research by Redel-Macías et al. in the seventh paper is to develop a novel model that can be used in pass-by noise tests for vehicles, based on ensembles of hybrid Evolutionary Product Unit or Radial Basis Function Neural Networks (EPUNNs or ERBFNNs) at high frequencies. Statistical models and ensembles of hybrid EPUNN and ERBFNN approaches have been used to develop different noise identification models. The results obtained using different ensembles of hybrid EPUNNs and ERBFNNs show that the functional model and the hybrid algorithms proposed provide very accurate identification compared to other statistical methodologies used to solve this regression problem. In the eighth paper, Wu et al. analyse the existence criterion of loop strategies and then present some corollaries and theorems by which loop strategies and chain strategies, as well as superfluous and inconsistent strategies, can be found. The paper presents a ranking model that indicates the weak nodes in a strategy set, and it also introduces a probability-based model that is the basis for the evaluation of strategies. Additionally, this research proposes a method to generate offensive strategies, and the statistical results of simulation games prove the validity of the method. In the ninth paper, Pop et al. present an efficient hybrid heuristic algorithm obtained by combining a genetic algorithm (GA) with a local–global approach to the generalized vehicle routing problem (GVRP) and a powerful local search procedure. The computational experiments on several benchmark instances show that the hybrid algorithm is competitive with all of the known heuristics published to date. In the tenth paper, Kramer et al. illustrate how methods from neural computation can serve as forecasting and monitoring techniques, contributing to a successful integration of wind power into sustainable and smart energy grids. The study is based on the application of kernel methods such as support vector regression and kernel density estimation as prediction methods. Furthermore, dimension-reduction techniques such as self-organizing maps are applied for monitoring high-dimensional wind time series. The methods are briefly introduced, related work is presented, and experimental case studies are described by way of example. The experimental parts are based on real wind energy time series data from the NREL western wind resource dataset. In the eleventh contribution, Vera et al. present a novel soft computing procedure based on the application of artificial neural networks, genetic algorithms and identification systems, which makes it possible to optimise the implementation conditions in the manufacturing process of high-precision parts, including finishing precision, while saving time, financial costs and/or energy.
The novel proposed approach was tested on real dental milling processes using a high-precision machining centre with five axes, requiring high finishing precision of measures in micrometres with a large number of process factors to analyse. The results of the experiments, which validate the performance of the proposed approach, are presented in this study. The final contribution, by Sakalauskas and Kriksciuniene, presents research on financial market efficiency aimed at recognizing the major reversal points of the long-term trend of a stock market index, which could indicate forthcoming crisis or market-rise periods. The study suggests a computational model of financial time series analysis which combines several soft computing approaches, including information efficiency evaluation methods (Shannon's entropy, the Hurst exponent), neural networks and sensitivity analysis. The model aims to derive an aggregated measure for evaluating the efficiency of the financial market and to find its interrelationships with the reversal of the long-term trend. A radial basis function neural network was designed for forecasting moments of cardinal changes in stock market behaviour, expressed by entropy values derived from the symbolized time series of the stock market index. The performance of the neural network model is explored by applying sensitivity analysis, which resulted in the selection of smoothing parameters for the input variables. The experimental research investigates the behaviour of the long-term trend of the three emerging financial markets within the NASDAQ OMX Baltic stock exchange. The introduction of information efficiency measures improves the ability of the model to distinguish the approaching reversal of the long-term trend from temporary market “nervousness” and can be useful for calibrating stock trading strategies. First, we would like to thank all the authors for their valuable contributions, which made this special issue possible. We would also like to thank our peer-reviewers for their timely, diligent work and efficient efforts. We are also grateful to the Editor-in-Chief of Neurocomputing, Prof. Tom Heskes, for his continued support for the SOCO series of conferences and for this special issue in this prestigious journal. Finally, we hope the reader will share our joy and find this special issue very useful.
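
    One ingredient named in the last summary, Shannon's entropy of a symbolized index series, can be sketched as follows (the thresholds, symbol alphabet and window are assumptions; the paper's exact symbolization is not reproduced here). Lower entropy over a window suggests less "efficient", more predictable market behaviour.

```python
import math

# Discretize index returns into symbols (down/flat/up) and measure the Shannon
# entropy of the resulting symbol window, in bits.

def symbolize(prices, flat=1e-3):
    symbols = []
    for prev, cur in zip(prices, prices[1:]):
        r = (cur - prev) / prev
        symbols.append("u" if r > flat else "d" if r < -flat else "f")
    return symbols

def shannon_entropy(symbols):
    n = len(symbols)
    probs = [symbols.count(s) / n for s in set(symbols)]
    return -sum(p * math.log2(p) for p in probs)

# Toy usage on a short price window.
prices = [100, 100.2, 100.1, 100.4, 100.4, 100.9, 101.3, 101.2]
print(round(shannon_entropy(symbolize(prices)), 3))
```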

    A Swarm-Based Rough Set Approach for Group Decision Support Systems

    This paper presents a class of investment problems in which many items can be chosen in a group decision environment. Usually there is a decision table produced by the board of directors after discussions. Most of the data come from their experience or estimation, so the information is redundant and inaccurate. A swarm-based rough set approach is introduced in an attempt to solve the problem. Rough set theory provides a mathematical tool that can be used for both feature selection and information reduction. The swarm-based reduction approaches are attractive for finding multiple reducts in decision systems, which can be applied to generate multiple investment plans and to improve the decision. Empirical results illustrate that the approach can be applied effectively to this class of investment problems.
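
    A hedged sketch of the reduct search idea mentioned above (not the paper's algorithm): binary particles encode attribute subsets, fitness combines the rough-set dependency degree with a reward for shorter subsets, and the crude swarm move below merely drifts particles toward the best subset found so far. The toy decision table and all weights are assumptions.

```python
import random

def dependency(table, decisions, subset):
    """Rough-set dependency degree |POS_B(D)| / |U| for attribute subset B."""
    groups = {}
    for row, d in zip(table, decisions):
        groups.setdefault(tuple(row[i] for i in subset), set()).add(d)
    pos = sum(sum(1 for row in table if tuple(row[i] for i in subset) == key)
              for key, ds in groups.items() if len(ds) == 1)   # consistent groups only
    return pos / len(table)

def fitness(table, decisions, mask):
    subset = [i for i, bit in enumerate(mask) if bit]
    if not subset:
        return 0.0
    return 0.9 * dependency(table, decisions, subset) + 0.1 * (1 - len(subset) / len(mask))

def swarm_reduct(table, decisions, particles=20, iters=100):
    n = len(table[0])
    best_mask, best_fit = None, -1.0
    swarm = [[random.randint(0, 1) for _ in range(n)] for _ in range(particles)]
    for _ in range(iters):
        for p in swarm:
            f = fitness(table, decisions, p)
            if f > best_fit:
                best_mask, best_fit = p[:], f
        for p in swarm:                          # crude move: drift bits toward the best mask
            for i in range(n):
                if random.random() < 0.2:
                    p[i] = best_mask[i]
                elif random.random() < 0.05:
                    p[i] = 1 - p[i]              # mutation keeps diversity
    return [i for i, bit in enumerate(best_mask) if bit]

# Toy decision table: 3 condition attributes, binary decision.
table     = [[1, 0, 2], [1, 1, 2], [0, 1, 1], [0, 0, 1], [1, 0, 0]]
decisions = [ 1,         1,         0,         0,         1]
print(swarm_reduct(table, decisions))
```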

    DIPKIP: A connectionist Knowledge Management System to Identify Knowledge Deficits in Practical Cases

    This study presents a novel, multidisciplinary research project entitled DIPKIP (data acquisition, intelligent processing, knowledge identification and proposal), which is a Knowledge Management (KM) system that profiles the KM status of a company. Qualitative data fed into the system allows it not only to assess the KM situation of the company in a straightforward and intuitive manner, but also to propose corrective actions to improve that situation. DIPKIP is based on four separate steps. An initial “Data Acquisition” step, in which key data is captured, is followed by an “Intelligent Processing” step, using neural projection architectures. Subsequently, the “Knowledge Identification” step classifies the company into one of three categories, which define a set of possible theoretical strategic knowledge situations: knowledge deficit, partial knowledge deficit, and no knowledge deficit. Finally, a “Proposal” step is performed, in which the “knowledge processes”—creation/acquisition, transference/distribution, and putting into practice/updating—are appraised to arrive at a coherent recommendation. The knowledge updating process (increasing the knowledge held and removing obsolete knowledge) is in itself a novel contribution. DIPKIP may be applied as a decision support system which, under the supervision of a KM expert, can provide useful and practical proposals to senior management for the improvement of KM, leading to flexibility, cost savings, and greater competitiveness. The research also analyses the future of powerful neural projection models in the emerging field of KM by reviewing a variety of robust unsupervised projection architectures, all of which are used to visualize the intrinsic structure of high-dimensional data sets. The main projection architecture in this research, known as Cooperative Maximum-Likelihood Hebbian Learning (CMLHL), manages to capture a degree of KM topological ordering based on the application of cooperative lateral connections. The results of two real-life case studies in very different industrial sectors corroborated the relevance and viability of the DIPKIP system and the concepts upon which it is founded.
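
    The CMLHL projection named above builds on Maximum-Likelihood Hebbian Learning; the sketch below shows only the basic MLHL update, with the cooperative lateral connections between outputs omitted for brevity (the learning rate, exponent p and toy data are assumptions).

```python
import numpy as np

# Basic MLHL projection sketch: feed-forward projection, feedback residual,
# and a Hebbian update shaped by |e|^(p-1) * sign(e).

def mlhl(X, n_outputs=2, p=1.5, lr=0.01, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_outputs, X.shape[1]))
    for _ in range(epochs):
        for x in rng.permutation(X):
            y = W @ x                                   # feed-forward projection
            e = x - W.T @ y                             # feedback residual
            W += lr * np.outer(y, np.sign(e) * np.abs(e) ** (p - 1))  # MLHL rule
    return X @ W.T                                      # low-dimensional projection

# Usage: project a toy high-dimensional data set to 2 dimensions for visualization.
X = np.random.default_rng(1).normal(size=(100, 8))
X = (X - X.mean(axis=0)) / X.std(axis=0)                # centre and scale first
print(mlhl(X).shape)                                    # (100, 2)
```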